Future sparse interactions: a MARL approach

Authors

  • Yann-Michaël De Hauwere
  • Peter Vrancx
  • Ann Nowé
Abstract

Recent research has demonstrated that considering local interactions among agents in specific parts of the state space is a successful way of simplifying the multi-agent learning process. By taking other agents into account only when a conflict is possible, an agent can significantly reduce the state-action space in which it learns. Current approaches, however, consider only immediate rewards for detecting conflicts. In this paper, we contribute a reinforcement learning algorithm that learns when a strategic interaction among agents is needed, several time-steps before the conflict is reflected in the (immediate) reward signal.
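The idea of learning over a reduced local space and expanding it only when interaction is needed can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' algorithm: the `conflict` flag stands in for whatever detector (immediate-reward-based in prior work, learned and predictive in this paper) signals that another agent must be taken into account, and all class and parameter names are hypothetical.

```python
import random

class SparseInteractionAgent:
    """Q-learning agent that normally learns over its own local state,
    and switches to an augmented (joint) state representation only when
    a conflict detector fires. Illustrative sketch only."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q_local = {}   # Q-values over the agent's local state alone
        self.q_joint = {}   # Q-values over states augmented with other agents
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def _table(self, conflict):
        # Use the larger joint table only when interaction is flagged.
        return self.q_joint if conflict else self.q_local

    def act(self, state, conflict):
        q = self._table(conflict)
        if random.random() < self.epsilon:
            return random.choice(self.actions)  # explore
        values = [q.get((state, a), 0.0) for a in self.actions]
        return self.actions[values.index(max(values))]  # greedy action

    def update(self, state, action, reward, next_state, conflict):
        # Standard one-step Q-learning update on whichever table is active.
        q = self._table(conflict)
        best_next = max(q.get((next_state, a), 0.0) for a in self.actions)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)
```

Because conflict-free states are handled by `q_local`, the agent only pays the cost of the joint state space in the (typically small) region where coordination matters.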

Related articles

Learning in Multi-agent Systems with Sparse Interactions by Knowledge Transfer and Game Abstraction

In many multi-agent systems, the interactions between agents are sparse and exploiting interaction sparseness in multiagent reinforcement learning (MARL) can improve the learning performance. Also, agents may have already learnt some single-agent knowledge (e.g., local value function) before the multi-agent learning process. In this work, we investigate how such knowledge can be utilized to lea...

Face Recognition using an Affine Sparse Coding approach

Sparse coding is an unsupervised method which learns a set of over-complete bases to represent data such as image and video. Sparse coding has increasing attraction for image classification applications in recent years. But in the cases where we have some similar images from different classes, such as face recognition applications, different images may be classified into the same class, and hen...

P-MARL: Prediction-Based Multi-Agent Reinforcement Learning for Non-Stationary Environments

Multi-Agent Reinforcement Learning (MARL) is a widely-used technique for optimization in decentralised control problems, addressing complex challenges when several agents change actions simultaneously and without collaboration. Such challenges are exacerbated when the environment in which the agents learn is inherently non-stationary, as agents’ actions are then non-deterministic. In this paper...

MARL-Ped: A multi-agent reinforcement learning based framework to simulate pedestrian groups

Pedestrian simulation is complex because there are different levels of behavior modeling. At the lowest level, local interactions between agents occur; at the middle level, strategic and tactical behaviors appear like overtakings or route choices; and at the highest level path-planning is necessary. The agent-based pedestrian simulators either focus on a specific level (mainly in the lower one)...

Three Perspectives on Multi-Agent Reinforcement Learning

This chapter concludes three perspectives on multi-agent reinforcement learning (MARL): (1) cooperative MARL, which performs mutual interaction between cooperative agents; (2) equilibrium-based MARL, which focuses on equilibrium solutions among gaming agents; and (3) best-response MARL, which suggests a no-regret policy against other competitive agents. Then the authors present a general framew...


Publication date: 2011